Deep neural networks require specific layers to process point clouds, as the scattered and irregular location of points prevents us from using convolutional filters. Here we introduce the composite layer, a new convolutional operator for point clouds. The peculiarity of our composite layer is that it extracts and compresses the spatial information from the position of points before combining it with their feature vectors. Compared to well-known point-convolutional layers, our composite layer provides additional regularization and guarantees greater flexibility in terms of design and number of parameters. To demonstrate this design flexibility, we also define an aggregate composite layer that combines spatial information and features in a nonlinear manner, and we use these layers to implement convolutional and aggregate CompositeNets. We train our CompositeNets for classification and, most remarkably, for unsupervised anomaly detection. Our experiments on synthetic and real-world datasets show that, in both tasks, our CompositeNets outperform PointNet and achieve results comparable to KPConv despite having a much simpler architecture. Moreover, our CompositeNets substantially outperform existing solutions for anomaly detection on point clouds.
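The abstract does not specify the composite layer in enough detail to reproduce it; as a rough illustration of the general idea only (an operator that first encodes relative point positions, then combines that encoding with the point features), a generic continuous point convolution can be sketched in numpy. The function name, the two-layer position encoder, and the untrained random weights are all assumptions for illustration, not the paper's design.

```python
import numpy as np

def point_conv(points, feats, centers, radius=0.5, hidden=8, out_dim=4, seed=0):
    """Toy continuous convolution on a point cloud.

    For each center, neighbours within `radius` are gathered, their
    relative positions are encoded by a small MLP (the "spatial" part),
    and the encoding is combined with the neighbour features and summed.
    """
    rng = np.random.default_rng(seed)
    d_in = feats.shape[1]
    # Random (untrained) weights of the position-encoding MLP.
    W1 = rng.normal(size=(points.shape[1], hidden))
    W2 = rng.normal(size=(hidden, d_in * out_dim))
    out = np.zeros((len(centers), out_dim))
    for i, c in enumerate(centers):
        mask = np.linalg.norm(points - c, axis=1) < radius
        if not mask.any():
            continue
        rel = points[mask] - c                    # relative positions
        enc = np.maximum(rel @ W1, 0.0) @ W2      # spatial encoding
        K = enc.reshape(-1, d_in, out_dim)        # per-neighbour kernel
        out[i] = np.einsum('nd,ndo->o', feats[mask], K)
    return out
```

A trained version would learn `W1` and `W2` by backpropagation in a deep-learning framework; this sketch only shows the data flow of a position-then-features operator.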
The chips contained in any electronic device are manufactured on circular silicon wafers, which are monitored by inspection machines at different production stages. Inspection machines detect and locate any defect on the wafer and return a Wafer Defect Map (WDM), i.e., a list of coordinates where defects lie, which can be seen as a huge, sparse, and binary image. In normal conditions, wafers exhibit a small number of randomly distributed defects, while defects grouped in specific patterns might indicate known or novel failure types in the production line. Needless to say, a primary concern of the semiconductor industry is to identify these patterns and intervene as soon as possible to restore normal production conditions. Here we address WDM monitoring as an open-set recognition problem, to accurately classify WDMs into known categories and promptly detect novel patterns. In particular, we propose a comprehensive pipeline for wafer monitoring based on a Submanifold Sparse Convolutional Network, a deep architecture designed to process sparse data at arbitrary resolution, which is trained on the known classes. To detect novelties, we define an outlier detector based on a Gaussian Mixture Model fitted on the latent representation of the classifier. Our experiments on a real dataset of WDMs show that Submanifold Sparse Convolutions, directly processing full-resolution WDMs, yield superior classification performance on the known classes compared to traditional convolutional neural networks, which require a preliminary binning to reduce the size of the binary images representing WDMs. Moreover, our solution outperforms state-of-the-art open-set recognition solutions in detecting novelties.
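The novelty-detection component described above (a Gaussian Mixture Model fitted on latent representations, flagging low-likelihood samples as novel) can be sketched in plain numpy. This is a minimal EM implementation under assumed defaults (two components, full covariances, fixed iteration count); the input features here merely stand in for the classifier's latent embeddings, and the decision threshold is left to the user.

```python
import numpy as np

def log_gauss(X, mu, cov):
    """Per-sample log-density of a multivariate Gaussian."""
    d = X.shape[1]
    _, logdet = np.linalg.slogdet(cov)
    diff = X - mu
    maha = np.einsum('nd,nd->n', diff @ np.linalg.inv(cov), diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

def fit_gmm(X, k=2, iters=100, seed=0):
    """Fit a k-component GMM with a basic EM loop."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, size=k, replace=False)].copy()
    covs = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibilities, computed stably in log-space.
        log_r = np.stack([np.log(weights[j]) + log_gauss(X, means[j], covs[j])
                          for j in range(k)], axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and covariances.
        Nk = r.sum(axis=0)
        weights = Nk / n
        means = (r.T @ X) / Nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (r[:, j, None] * diff).T @ diff / Nk[j] + 1e-6 * np.eye(d)
    return weights, means, covs

def log_likelihood(X, weights, means, covs):
    """Per-sample GMM log-likelihood; low values indicate novelty."""
    logs = np.stack([np.log(w) + log_gauss(X, m, c)
                     for w, m, c in zip(weights, means, covs)], axis=1)
    m = logs.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logs - m).sum(axis=1, keepdims=True))).ravel()
```

In practice a library implementation (e.g. a packaged GMM with regularized covariances and a validation-calibrated threshold) would replace this sketch.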
Detecting anomalous regions in images is a frequently encountered problem in industrial monitoring. A relevant example is the analysis of tissues and other products that, in normal conditions, conform to a specific texture, while defects introduce changes in the normal pattern. We address the anomaly detection problem by training a deep autoencoder, and we show that adopting a loss function based on Complex Wavelet Structural Similarity (CW-SSIM) yields superior detection performance on this type of images compared to traditional autoencoder loss functions. Our experiments on well-known anomaly detection benchmarks show that a simple model trained with this loss function can achieve comparable or superior performance to state-of-the-art methods leveraging deeper, larger, and more computationally demanding neural networks.
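To make the loss concrete, here is a sketch of a plain, single-window SSIM used as a reconstruction loss. Note the hedge: this is ordinary SSIM on pixel intensities, not the CW-SSIM the abstract refers to, which operates on complex wavelet coefficients and requires a differentiable implementation in a deep-learning framework for training; constant choices follow common SSIM defaults.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM between two images normalised to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx**2 + my**2 + c1) * (vx + vy + c2)))

def ssim_loss(original, reconstruction):
    """Loss to minimise: identical images give 0, dissimilar ones approach 1+."""
    return 1.0 - ssim(original, reconstruction)
```

During training, `ssim_loss` would replace a pixel-wise MSE term, penalising structural rather than purely pointwise reconstruction errors; at test time, low local similarity between input and reconstruction marks candidate anomalous regions.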
We address the problem of online change detection in multivariate data streams, and we introduce QuantTree Exponentially Weighted Moving Average (QT-EWMA), a nonparametric change-detection algorithm that can control the expected time before a false alarm, yielding a desired Average Run Length (ARL$_0$). Controlling false alarms is crucial in many applications and is rarely guaranteed by online change-detection algorithms that can monitor multivariate data streams without knowing the data distribution. Like many change-detection algorithms, QT-EWMA builds a model of the data distribution, in our case a QuantTree histogram, from a stationary training set. To monitor data streams even when the training set is extremely small, we propose QT-EWMA-update, which incrementally updates the QuantTree histogram during monitoring, always keeping the ARL$_0$ under control. Our experiments, performed on synthetic and real-world data streams, demonstrate that QT-EWMA and QT-EWMA-update control the ARL$_0$ and the false alarm rate better than state-of-the-art methods operating in similar conditions, achieving lower or comparable detection delays.
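For readers unfamiliar with EWMA monitoring, the classical univariate EWMA control chart that QT-EWMA builds on can be sketched in a few lines. This is the textbook chart, not QT-EWMA itself (which monitors QuantTree bin statistics and calibrates its thresholds to guarantee a target ARL$_0$); `lam` and `L` are conventional default choices.

```python
import math

def ewma_detector(stream, mu0, sigma0, lam=0.1, L=3.0):
    """Classical EWMA control chart on a univariate stream.

    Returns the index of the first sample whose EWMA statistic leaves
    the +-L*sigma_z control limits, or None if no change is flagged.
    """
    z = mu0
    for t, x in enumerate(stream):
        z = lam * x + (1 - lam) * z
        # Exact (time-dependent) variance of the EWMA statistic.
        var_z = sigma0**2 * (lam / (2 - lam)) * (1 - (1 - lam)**(2 * (t + 1)))
        if abs(z - mu0) > L * math.sqrt(var_z):
            return t
    return None
```

The exponential weighting makes the statistic react within a few samples of a mean shift while the control limits keep the in-control false-alarm rate low.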
Graph Neural Networks (GNNs) achieve state-of-the-art performance on graph-structured data across numerous domains. Their underlying ability to represent nodes as summaries of their vicinities has proven effective for homophilous graphs in particular, in which same-type nodes tend to connect. On heterophilous graphs, in which different-type nodes are likely connected, GNNs perform less consistently, as neighborhood information might be less representative or even misleading. On the other hand, GNN performance is not inferior on all heterophilous graphs, and there is a lack of understanding of what other graph properties affect GNN performance. In this work, we highlight the limitations of the widely used homophily ratio and the recent Cross-Class Neighborhood Similarity (CCNS) metric in estimating GNN performance. To overcome these limitations, we introduce 2-hop Neighbor Class Similarity (2NCS), a new quantitative graph structural property that correlates with GNN performance more strongly and consistently than alternative metrics. 2NCS considers two-hop neighborhoods as a theoretically derived consequence of the two-step label propagation process governing GCN's training-inference process. Experiments on one synthetic and eight real-world graph datasets confirm consistent improvements over existing metrics in estimating the accuracy of GCN- and GAT-based architectures on the node classification task.
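The contrast between one-hop and two-hop label structure can be made concrete with a small sketch: the standard edge homophily ratio next to an illustrative two-hop label-agreement score. The second function is a simplified proxy written for this example, not the paper's exact 2NCS definition.

```python
def edge_homophily(edges, labels):
    """Fraction of edges joining same-label endpoints (standard homophily ratio)."""
    return sum(labels[u] == labels[v] for u, v in edges) / len(edges)

def two_hop_agreement(edges, labels):
    """Average fraction of 2-hop neighbours sharing a node's label.

    An illustrative proxy only -- not the paper's exact 2NCS metric.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    scores = []
    for u in adj:
        two_hop = set()
        for v in adj[u]:
            two_hop |= adj[v]
        two_hop.discard(u)
        if two_hop:
            scores.append(sum(labels[w] == labels[u] for w in two_hop) / len(two_hop))
    return sum(scores) / len(scores)
```

On a 4-cycle with alternating labels, edge homophily is 0 (maximally heterophilous) while every 2-hop neighbour agrees, illustrating why one-hop homophily alone can misjudge how informative a graph's structure is for a two-layer GNN.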
Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of spiking neural networks within PyTorch, including support for auto differentiation in a fully-automated hardware experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
Generalisation to unseen contexts remains a challenge for embodied navigation agents. In the context of semantic audio-visual navigation (SAVi) tasks, the notion of generalisation should include both generalising to unseen indoor visual scenes as well as generalising to unheard sounding objects. However, previous SAVi task definitions do not include evaluation conditions on truly novel sounding objects, resorting instead to evaluating agents on unheard sound clips of known objects; meanwhile, previous SAVi methods do not include explicit mechanisms for incorporating domain knowledge about object and region semantics. These weaknesses limit the development and assessment of models' abilities to generalise their learned experience. In this work, we introduce the use of knowledge-driven scene priors in the semantic audio-visual embodied navigation task: we combine semantic information from our novel knowledge graph that encodes object-region relations, spatial knowledge from dual Graph Encoder Networks, and background knowledge from a series of pre-training tasks -- all within a reinforcement learning framework for audio-visual navigation. We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects. We show improvements over strong baselines in generalisation to unseen regions and novel sounding objects, within the Habitat-Matterport3D simulation environment, under the SoundSpaces task.
Multi-document summarization (MDS) has traditionally been studied assuming a set of ground-truth topic-related input documents is provided. In practice, the input document set is unlikely to be available a priori and would need to be retrieved based on an information need, a setting we call open-domain MDS. We experiment with current state-of-the-art retrieval and summarization models on several popular MDS datasets extended to the open-domain setting. We find that existing summarizers suffer large reductions in performance when applied as-is to this more realistic task, though training summarizers with retrieved inputs can reduce their sensitivity to retrieval errors. To further probe these findings, we conduct perturbation experiments on summarizer inputs to study the impact of different types of document retrieval errors. Based on our results, we provide practical guidelines to help facilitate a shift to open-domain MDS. We release our code and experimental results alongside all data or model artifacts created during our investigation.
We consider the problem of two active particles in 2D complex flows with the multi-objective goals of minimizing both the dispersion rate and the energy consumption of the pair. We approach the problem by means of Multi Objective Reinforcement Learning (MORL), combining scalarization techniques together with a Q-learning algorithm, for Lagrangian drifters that have variable swimming velocity. We show that MORL is able to find a set of trade-off solutions forming an optimal Pareto frontier. As a benchmark, we show that a set of heuristic strategies are dominated by the MORL solutions. We consider the situation in which the agents cannot update their control variables continuously, but only after a discrete (decision) time, $\tau$. We show that there is a range of decision times, in between the Lyapunov time and the continuous updating limit, where Reinforcement Learning finds strategies that significantly improve over heuristics. In particular, we discuss how large decision times require enhanced knowledge of the flow, whereas for smaller $\tau$ all a priori heuristic strategies become Pareto optimal.
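The scalarization-plus-Q-learning combination can be illustrated on a toy problem. The sketch below runs tabular Q-learning on a deterministic chain MDP with a two-dimensional reward (reach-the-goal vs. per-step energy cost) collapsed by a linear scalarization weight `w`; every detail of the environment and the hyperparameters is an assumption made for this example, not the paper's setup.

```python
import numpy as np

def scalarized_q_learning(w, n_states=5, episodes=300, alpha=0.5, gamma=0.9,
                          eps=0.2, seed=0):
    """Tabular Q-learning on a chain MDP with a 2D reward, linearly scalarized.

    Objective 1: +1 for reaching the last state. Objective 2: -1 per step
    (an "energy" cost). Actions: 0 = left, 1 = right.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # Epsilon-greedy action selection.
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r_vec = np.array([1.0 if s2 == n_states - 1 else 0.0, -1.0])
            r = float(w @ r_vec)          # linear scalarization of both objectives
            done = s2 == n_states - 1
            target = r if done else r + gamma * Q[s2].max()
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
            if done:
                break
    return Q
```

Sweeping `w` over different trade-offs between the two objectives and collecting the resulting policies is the basic scalarization route to approximating a Pareto frontier.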
Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs. However, popular explanation methods have been found to be fragile to minor perturbations of input features or model parameters. Relying on constraint relaxation techniques from non-convex optimization, we develop a method that upper-bounds the largest change an adversary can make to a gradient-based explanation via bounded manipulation of either the input features or model parameters. By propagating a compact input or parameter set as symbolic intervals through the forward and backward computations of the neural network we can formally certify the robustness of gradient-based explanations. Our bounds are differentiable, hence we can incorporate provable explanation robustness into neural network training. Empirically, our method surpasses the robustness provided by previous heuristic approaches. We find that our training method is the only method able to learn neural networks with certificates of explanation robustness across all six datasets tested.
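The core idea of propagating intervals through both the forward and the backward pass can be sketched for the simplest case: bounding the input gradient of a one-hidden-layer ReLU network over an input box. This naive interval sketch is for illustration only; the paper's method relies on constraint relaxations and would give tighter bounds.

```python
import numpy as np

def interval_matvec(W, xl, xu):
    """Interval arithmetic for W @ x with x in [xl, xu] (elementwise)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ xl + Wn @ xu, Wp @ xu + Wn @ xl

def grad_bounds(W1, b1, w2, xl, xu):
    """Bound d f / d x of f(x) = w2 . relu(W1 x + b1) over the input box."""
    # Forward pass: pre-activation intervals.
    pl, pu = interval_matvec(W1, xl, xu)
    pl, pu = pl + b1, pu + b1
    # ReLU derivative over the box: 1 if surely active, 0 if surely
    # inactive, otherwise anywhere in [0, 1].
    dl = (pl > 0).astype(float)
    du = (pu > 0).astype(float)
    # Backward pass: gradient = W1^T (w2 * relu'(pre)); bound each factor.
    t_lo = np.minimum(w2 * dl, w2 * du)
    t_hi = np.maximum(w2 * dl, w2 * du)
    return interval_matvec(W1.T, t_lo, t_hi)

def grad_exact(W1, b1, w2, x):
    """Exact input gradient of the same network at a single point."""
    act = (W1 @ x + b1 > 0).astype(float)
    return W1.T @ (w2 * act)
```

By construction the exact gradient at any point of the box lies inside the returned bounds, which is the soundness property a certificate of explanation robustness rests on.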